Results 1 - 3 of 3
1.
ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023 ; 4060-4064, 2023.
Article in English | Scopus | ID: covidwho-20242469

ABSTRACT

The COVID-19 pandemic has been at the center of many of our lives for at least a couple of years, during which periods of isolation and lockdowns were common. How did all of that affect our mental well-being, especially for those who were already in distress? To investigate the matter, we analyse the online discussions on Sanctioned Suicide, a forum where users discuss suicide-related topics freely. We collected discussions from March 2018 (before the pandemic) up to July 2022, for a total of 53K threads with 700K comments and 16K users. We investigate the impact of COVID-19 on the discussions in the forum. The data show that COVID-19, while present in the discussions, especially during the first lockdown, has not been the main reason why new users registered on the forum. However, COVID-19 appears to be indirectly connected to other causes of distress for the users, such as anxiety about the economy. © 2023 ACM.
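
A minimal sketch, not the authors' code, of how the share of COVID-related threads over time could be measured. It assumes a hypothetical threads.csv with 'created_at' and 'text' columns; the keyword list is illustrative.

```python
import pandas as pd

# Load forum threads (hypothetical file and column names).
threads = pd.read_csv("threads.csv", parse_dates=["created_at"])

# Flag threads whose text mentions COVID-related terms (case-insensitive).
covid_terms = r"covid|coronavirus|pandemic|lockdown"
threads["mentions_covid"] = threads["text"].str.contains(covid_terms, case=False, na=False)

# Monthly share of COVID-mentioning threads across the collection period.
monthly_share = (
    threads.set_index("created_at")
    .resample("M")["mentions_covid"]
    .mean()
)
print(monthly_share)
```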

2.
Experimental IR Meets Multilinguality, Multimodality, and Interaction (CLEF 2022) ; 13390:495-520, 2022.
Article in English | Web of Science | ID: covidwho-2094392

ABSTRACT

We describe the fifth edition of the CheckThat! lab, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality in multiple languages: Arabic, Bulgarian, Dutch, English, German, Spanish, and Turkish. Task 1 asks to identify relevant claims in tweets in terms of check-worthiness, verifiability, harmfulness, and attention-worthiness. Task 2 asks to detect previously fact-checked claims that could be relevant to fact-check a new claim; it targets both tweets and political debates/speeches. Task 3 asks to predict the veracity of the main claim in a news article. CheckThat! was the most popular lab at CLEF-2022 in terms of team registrations: 137 teams. More than one-third (37%) of them actually participated: 18, 7, and 26 teams submitted 210, 37, and 126 official runs for Tasks 1, 2, and 3, respectively.
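
A minimal sketch of a Task 2-style retrieval setup: ranking previously fact-checked claims by similarity to a new claim. This is not an official CheckThat! baseline; the claims below are invented for illustration, and a simple TF-IDF cosine similarity stands in for whatever retrieval model participants actually used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of previously fact-checked claims.
fact_checked = [
    "Vaccine X causes condition Y.",
    "Country Z banned all international flights in 2020.",
    "Drinking hot water cures viral infections.",
]
new_claim = "Hot beverages can cure a virus infection."

# Vectorize the fact-checked claims together with the new claim.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(fact_checked + [new_claim])

# Cosine similarity between the new claim (last row) and each stored claim.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for score, claim in sorted(zip(scores, fact_checked), reverse=True):
    print(f"{score:.3f}  {claim}")
```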

3.
44th European Conference on Information Retrieval (ECIR) ; 13186:416-428, 2022.
Article in English | Web of Science | ID: covidwho-1820909

ABSTRACT

The fifth edition of the CheckThat! Lab is held as part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting various factuality tasks in seven languages: Arabic, Bulgarian, Dutch, English, German, Spanish, and Turkish. Task 1 focuses on disinformation related to the ongoing COVID-19 infodemic and politics, and asks to predict whether a tweet is worth fact-checking, contains a verifiable factual claim, is harmful to society, or is of interest to policy makers and why. Task 2 asks to retrieve claims that have been previously fact-checked and that could be useful to verify the claim in a tweet. Task 3 is to predict the veracity of a news article. Tasks 1 and 3 are classification problems, while Task 2 is a ranking one.
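
A minimal sketch of a Task 1-style classifier for tweet check-worthiness, framed as binary classification. This is not an official lab baseline; the tweets and labels are invented placeholders, and a TF-IDF plus logistic regression pipeline stands in for the models participants actually submitted.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: 1 = worth fact-checking, 0 = not.
tweets = [
    "The new variant spreads twice as fast, officials say.",
    "Good morning everyone, stay safe!",
    "Hospital admissions doubled last week according to the ministry.",
    "Can't wait for the weekend.",
]
labels = [1, 0, 1, 0]

# Text features plus a linear classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# Predict check-worthiness for a new tweet.
print(model.predict(["Cases dropped by 40% after the lockdown, the report claims."]))
```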
